A law of the iterated logarithm under sublinear expectations
In this paper, motivated by the notion of independent, identically
distributed (IID) random variables under sublinear expectations introduced
by Peng, we investigate a law of the iterated logarithm for capacities. Our
theorem is a natural extension of the Kolmogorov and Hartman-Wintner laws
of the iterated logarithm.
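The classical Hartman-Wintner law referenced above states that for IID real random variables $X_1, X_2, \dots$ with $\mathbb{E}X_1 = 0$ and $\operatorname{Var}(X_1) = \sigma^2 < \infty$, the partial sums $S_n = X_1 + \cdots + X_n$ satisfy, almost surely,

$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{2n\log\log n}} = \sigma, \qquad \liminf_{n\to\infty} \frac{S_n}{\sqrt{2n\log\log n}} = -\sigma.$$

Under sublinear expectations, the analogous statement is formulated in terms of capacities rather than a probability measure, with the upper and lower variances playing the role of $\sigma^2$.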
On limit theorems for continued fractions
It is shown that, for sums of functionals of the digits in the continued
fraction expansion, the Kolmogorov-Feller weak laws of large numbers and
the Khinchine-Lévy-Feller-Raikov characterization of the domain of
attraction of the normal law hold.
Support Vector Machine Implementations for Classification & Clustering
BACKGROUND: We describe Support Vector Machine (SVM) applications to classification and clustering of channel current data. SVMs are variational-calculus based methods constrained to provide structural risk minimization (SRM), i.e., noise-tolerant solutions for pattern recognition. The SVM approach encapsulates a significant amount of model-fitting information in the choice of its kernel. In work thus far, novel information-theoretic kernels have been successfully employed for notably better performance than standard kernels. There are currently two approaches to implementing multiclass SVMs. The first, external multiclass, arranges several binary classifiers as a decision tree such that together they perform a single multiclass decision, with each leaf corresponding to a unique class. The second, internal multiclass, solves a single optimization problem over the entire data set (with multiple hyperplanes). RESULTS: Two SVM approaches to multiclass discrimination are described: (1) internal multiclass (a single optimization) and (2) external multiclass (an optimized decision tree). We describe the benefits of the internal-SVM approach, along with refinements to the internal-multiclass SVM algorithms that significantly reduce training time without sacrificing accuracy. In situations where the data are not clearly separable, making for poor discrimination, signal clustering is used to provide robust and useful information; to this end, novel SVM-based clustering methods are also described. As with classification, there are internal and external SVM clustering algorithms, both of which are briefly described
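The external-multiclass arrangement described above can be sketched as a binary tree whose internal nodes are binary SVMs and whose leaves are single classes. The following is a minimal illustration only, not the authors' implementation: the Pegasos-style subgradient solver for each node and the halve-the-class-set split rule are both assumptions made for the sketch.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the hinge loss; y must be in {-1, +1}."""
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:          # margin violated: hinge update
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                   # only shrink w (regularization)
                w = (1 - eta * lam) * w
    return w, b

def build_tree(X, y, classes):
    """Each internal node is a binary SVM separating two halves of the class
    set; each leaf is a single class (the 'external' multiclass arrangement)."""
    if len(classes) == 1:
        return classes[0]
    left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]
    labels = np.where(np.isin(y, left), 1, -1)
    w, b = train_linear_svm(X, labels)
    lmask, rmask = np.isin(y, left), np.isin(y, right)
    return (w, b,
            build_tree(X[lmask], y[lmask], left),
            build_tree(X[rmask], y[rmask], right))

def predict_one(node, x):
    """Descend the tree of binary decisions until a leaf (class label) is reached."""
    if not isinstance(node, tuple):
        return node
    w, b, left, right = node
    return predict_one(left if x @ w + b >= 0 else right, x)

# Usage on three well-separated 2-D Gaussian blobs (synthetic stand-in data):
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in [(0, 0), (4, 0), (0, 4)]])
y = np.repeat([0, 1, 2], 30)
tree = build_tree(X, y, [0, 1, 2])
preds = np.array([predict_one(tree, x) for x in X])
accuracy = (preds == y).mean()
```

An internal-multiclass SVM would instead solve one optimization with a hyperplane per class; the tree above trades that joint problem for several smaller binary ones, which is what makes the external approach attractive for training time.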
Comprehensive models of diffuse interstellar clouds: physical conditions and molecular abundances
The limitations of steady-state models of interstellar clouds are explored by comparison with observational data for clouds in front of Zeta Per, Zeta Oph, Chi Oph, and Omicron Per. The improved cloud models were constructed to reproduce the observed H and H2(J) column densities for several lines of sight. The main difference from previous models is the treatment of self-shielding in the H2 lines. Other improvements over previous models are discussed as well.